Transition to Computer Vision
Today we move from simple, structured data handled by basic linear layers to high-dimensional image data. A single color image introduces significant complexity that standard architectures cannot manage efficiently. Deep learning for computer vision calls for a specialized approach: the Convolutional Neural Network (CNN).
1. Why Fully Connected Networks (FCNs) Fail
In an FCN, every input pixel must be connected to every neuron in the subsequent layer. For high-resolution images this causes a computational explosion, making training impractical and generalization poor due to extreme overfitting.
- Input Dimension: A standard $224 \times 224$ RGB image results in $150,528$ input features ($224 \times 224 \times 3$).
- Hidden Layer Size: assume the first hidden layer uses 1,024 neurons.
- Total Parameters (Layer 1): $\approx 154$ million weights ($150,528 \times 1024$) just for the first connection block, requiring massive memory and compute time (see the sketch after this list).
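As a quick sanity check on these numbers, here is a minimal sketch (assuming PyTorch; the 1,024-neuron hidden layer is the illustrative width from the list above) that counts the parameters of that first fully connected block:

```python
import torch.nn as nn

# Hypothetical first fully connected layer for a flattened 224x224 RGB image
in_features = 224 * 224 * 3   # 150,528 input features
hidden = 1024                 # assumed hidden-layer width from the list above

fc1 = nn.Linear(in_features, hidden)
n_params = sum(p.numel() for p in fc1.parameters())
print(f"fc1 parameters: {n_params:,}")  # 154,141,696 (150,528 * 1,024 weights + 1,024 biases)
```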
The CNN Solution
CNNs solve the FCN scalability problem by exploiting the spatial structure of images. They identify patterns (such as edges or curves) using small filters, cutting the number of parameters by orders of magnitude and promoting robustness.
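A rough parameter-count comparison, assuming PyTorch and an illustrative 32-filter convolutional layer with $3 \times 3$ kernels, shows the gap of several orders of magnitude:

```python
import torch.nn as nn

# Fully connected: every one of the 150,528 inputs feeds each of 1,024 neurons
fc = nn.Linear(224 * 224 * 3, 1024)

# Convolutional: 32 small 3x3 filters over 3 input channels, reused at every position
conv = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3)

def count_params(module):
    return sum(p.numel() for p in module.parameters())

print(f"FC layer:   {count_params(fc):,} parameters")    # ~154 million
print(f"Conv layer: {count_params(conv):,} parameters")  # 896 (32*3*3*3 weights + 32 biases)
```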
Question 1
What is the primary benefit of using Local Receptive Fields in CNNs?
Question 2
If a $3 \times 3$ filter is applied across an entire image, what core CNN concept is being utilized?
Question 3
Which CNN component is responsible for progressively reducing the spatial dimensions (width and height) of the feature maps?
Challenge: Identifying Key CNN Components
Relate CNN mechanisms to their functional benefits.
We need to build a vision model that is highly parameter efficient and can recognize an object even if it slightly shifts its position in the image.
Step 1
Which mechanism ensures the network can identify a feature (like a diagonal line) regardless of where it is in the frame?
Solution:
Shared Weights. By using the same filter across all locations, the network learns translation invariance.
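A small sketch of this idea (assuming PyTorch; the 8x8 toy image and hand-written edge filter are illustrative): the same shared filter is slid over the original image and over a shifted copy, and the peak response simply moves along with the feature.

```python
import torch
import torch.nn.functional as F

# One shared 3x3 vertical-edge filter, applied at every location of the image
kernel = torch.tensor([[-1., 0., 1.],
                       [-1., 0., 1.],
                       [-1., 0., 1.]]).view(1, 1, 3, 3)

# Toy 8x8 image with a bright vertical line, plus the same image shifted right by 2 pixels
img = torch.zeros(1, 1, 8, 8)
img[..., :, 3] = 1.0
shifted = torch.roll(img, shifts=2, dims=-1)

resp = F.conv2d(img, kernel, padding=1)
resp_shifted = F.conv2d(shifted, kernel, padding=1)

# The strongest response per row moves by the same 2 columns: same filter, same feature, new place
print(resp[0, 0].abs().argmax(dim=-1))
print(resp_shifted[0, 0].abs().argmax(dim=-1))
```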
Step 2
What architectural choice allows a CNN to detect features with fewer parameters than an FCN?
Solution:
Local Receptive Fields (or Sparse Connectivity). Instead of connecting to every pixel, each neuron only connects to a small, localized region of the input.
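A minimal sketch of sparse connectivity (assuming PyTorch; the 8x8 input and single 3x3 filter are illustrative): each output unit depends only on its local patch, so perturbing a pixel outside that patch leaves the unit untouched.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(1, 1, kernel_size=3)  # one 3x3 filter, no padding -> 6x6 output

x = torch.zeros(1, 1, 8, 8)
base = conv(x)

# Perturb a pixel far outside the top-left output unit's 3x3 receptive field
x2 = x.clone()
x2[..., 7, 7] = 5.0
perturbed = conv(x2)

# Output unit (0, 0) only sees input rows/cols 0-2, so it is unchanged
print(base[0, 0, 0, 0].item(), perturbed[0, 0, 0, 0].item())
# Output unit (5, 5) does see pixel (7, 7), so its value changes
print(base[0, 0, 5, 5].item(), perturbed[0, 0, 5, 5].item())
```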
Step 3
How does the CNN structure lead to hierarchical feature learning (e.g., edges $\to$ corners $\to$ objects)?
Solution:
Stacked Layers. Early layers learn simple features (edges) using convolution. Deeper layers combine the outputs of earlier layers to form complex, abstract features (objects).
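A toy stack illustrating the idea (assuming PyTorch; the channel widths 16/32/64 and three conv-pool stages are arbitrary choices, not a prescribed architecture): each stage convolves and pools, so deeper layers see progressively larger regions of the input while the channel depth grows.

```python
import torch
import torch.nn as nn

# Illustrative three-stage stack: spatial size shrinks, feature depth grows
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # low-level edges
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # corners, textures
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # object parts
)

x = torch.randn(1, 3, 224, 224)
for i, layer in enumerate(model):
    x = layer(x)
    print(i, type(layer).__name__, tuple(x.shape))
# Spatial resolution: 224 -> 112 -> 56 -> 28, while channels go 3 -> 16 -> 32 -> 64
```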